233 research outputs found

    A Simple Proof of the Entropy-Power Inequality via Properties of Mutual Information

    While most useful information theoretic inequalities can be deduced from the basic properties of entropy or mutual information, Shannon's entropy power inequality (EPI) seems to be an exception: available information theoretic proofs of the EPI hinge on integral representations of differential entropy using either Fisher's information (FI) or minimum mean-square error (MMSE). In this paper, we first present a unified view of proofs via FI and MMSE, showing that they are essentially dual versions of the same proof, and then fill the gap by providing a new, simple proof of the EPI, which is solely based on the properties of mutual information and sidesteps both the FI and MMSE representations. Comment: 5 pages, accepted for presentation at the IEEE International Symposium on Information Theory 200
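    For reference (the abstracts in this listing do not restate it), Shannon's EPI in its standard form says that for independent random vectors X and Y in R^n having densities,
    \[
      e^{\frac{2}{n} h(X+Y)} \;\ge\; e^{\frac{2}{n} h(X)} + e^{\frac{2}{n} h(Y)},
    \]
    where h denotes differential entropy, with equality if and only if X and Y are Gaussian with proportional covariance matrices.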

    Information Theoretic Proofs of Entropy Power Inequalities

    While most useful information theoretic inequalities can be deduced from the basic properties of entropy or mutual information, Shannon's entropy power inequality (EPI) has so far been an exception: existing information theoretic proofs of the EPI hinge on representations of differential entropy using either Fisher information or minimum mean-square error (MMSE), which are derived from de Bruijn's identity. In this paper, we first present a unified view of these proofs, showing that they share two essential ingredients: 1) a data processing argument applied to a covariance-preserving linear transformation; 2) an integration over a path of a continuous Gaussian perturbation. Using these ingredients, we develop a new and brief proof of the EPI through a mutual information inequality, which replaces Stam and Blachman's Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai and Verdú used in earlier proofs. The result has the advantage of being very simple in that it relies only on the basic properties of mutual information. These ideas are then generalized to various extended versions of the EPI: Zamir and Feder's generalized EPI for linear transformations of the random variables, Takano and Johnson's EPI for dependent variables, Liu and Viswanath's covariance-constrained EPI, and Costa's concavity inequality for the entropy power. Comment: submitted for publication in the IEEE Transactions on Information Theory, revised version
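    The two representations referred to above are, in their usual scalar forms (stated here for orientation, not quoted from the paper): de Bruijn's identity, which for a random variable X with density and an independent standard Gaussian Z reads
    \[
      \frac{d}{dt}\, h\bigl(X + \sqrt{t}\,Z\bigr) \;=\; \tfrac{1}{2}\, J\bigl(X + \sqrt{t}\,Z\bigr),
    \]
    with J the Fisher information, and the I-MMSE relation of Guo, Shamai and Verdú,
    \[
      \frac{d}{d\gamma}\, I\bigl(X;\, \sqrt{\gamma}\,X + Z\bigr) \;=\; \tfrac{1}{2}\, \mathrm{mmse}(X,\gamma),
      \qquad \mathrm{mmse}(X,\gamma) = \mathbb{E}\Bigl[\bigl(X - \mathbb{E}[X \mid \sqrt{\gamma}\,X + Z]\bigr)^{2}\Bigr].
    \]
    Integrating either identity along the path of Gaussian perturbation yields the integral representations of differential entropy on which the earlier proofs rely.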

    Rényi Entropy Power Inequalities via Normal Transport and Rotation

    Following a recent proof of Shannon's entropy power inequality (EPI), a comprehensive framework for deriving various EPIs for the Rényi entropy is presented that uses transport arguments from normal densities and a change of variable by rotation. Simple arguments are given to recover the previously known Rényi EPIs and derive new ones, by unifying a multiplicative form with constant c and a modification with exponent α of previous works. In particular, for log-concave densities, we obtain a simple transportation proof of a sharp varentropy bound. Comment: 17 pages. Entropy Journal, to appear
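    For context (standard definitions, not part of the abstract): the Rényi entropy of order α > 0, α ≠ 1, of a random vector X in R^n with density f is
    \[
      h_\alpha(X) \;=\; \frac{1}{1-\alpha} \log \int f(x)^{\alpha}\, dx,
    \]
    with the corresponding Rényi entropy power N_\alpha(X) = e^{2 h_\alpha(X)/n}. The Rényi EPIs mentioned above are inequalities of the form N_\alpha(X+Y) \ge c\,\bigl(N_\alpha(X) + N_\alpha(Y)\bigr) for independent X and Y, either with a multiplicative constant c or with a modified exponent on the entropy powers.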

    Yet Another Proof of the Entropy Power Inequality

    Yet another simple proof of the entropy power inequality is given, which avoids both the integration over a path of Gaussian perturbation and the use of Young's inequality with sharp constant or Rényi entropies. The proof is based on a simple change of variables, is formally identical in one and several dimensions, and easily settles the equality case.

    At Every Corner: Determining Corner Points of Two-User Gaussian Interference Channels

    The corner points of the capacity region of the two-user Gaussian interference channel under strong or weak interference are determined using the notions of almost Gaussian random vectors, almost lossless addition of random vectors, and almost linearly dependent random vectors. In particular, the "missing" corner point problem is solved in a manner that differs from previous works in that it avoids the use of integration over a continuum of SNR values or of Monge-Kantorovitch transportation problems.
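    For orientation (standard normalized model, not taken from the abstract), the two-user Gaussian interference channel is commonly written as
    \[
      Y_1 = X_1 + a\,X_2 + Z_1, \qquad Y_2 = b\,X_1 + X_2 + Z_2,
    \]
    with independent noises Z_1, Z_2 ~ N(0,1) and power constraints E[X_i^2] \le P_i; in this normalization, strong interference corresponds to a^2 \ge 1 and b^2 \ge 1, and weak interference to a^2 \le 1 and b^2 \le 1. The corner points at issue are the extreme points of the capacity region at which one of the users communicates at its maximal single-user rate \tfrac{1}{2}\log_2(1+P_i).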

    Equality in the Matrix Entropy-Power Inequality and Blind Separation of Real and Complex Sources

    The matrix version of the entropy-power inequality for real or complex coefficients and variables is proved using a transportation argument that easily settles the equality case. An application to blind source extraction is given. Comment: 5 pages, in Proc. 2019 IEEE International Symposium on Information Theory (ISIT 2019), Paris, France, July 7-12, 2019

    Une théorie mathématique de la communication

    In this founding text of information theory, Shannon defines the notion of communication, grounds it on that of probability, defines the term bit as a logarithmic measure of information, as well as the notion of information entropy (by analogy with Boltzmann's entropy in statistical physics). He also gives a mathematical definition of the capacity of a transmission channel: information can be transmitted reliably as long as the rate does not exceed this capacity; the noise present in the channel does not limit the quality of the communication, only the transmission rate.
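    A minimal formal rendering of the capacity notion described above (standard textbook form, not taken from the text itself): for a memoryless channel with input X and output Y,
    \[
      C \;=\; \max_{p(x)}\, I(X;Y),
    \]
    and Shannon's channel coding theorem states that every rate R < C is achievable with arbitrarily small error probability, whereas reliable transmission is impossible at any rate R > C.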

    On Shannon's formula and Hartley's rule: beyond the mathematical coincidence

    In the information theory community, the following “historical” statements are generally well accepted: (1) Hartley put forth his rule twenty years before Shannon; (2) Shannon's formula, as a fundamental tradeoff between transmission rate, bandwidth, and signal-to-noise ratio, came out unexpectedly in 1948; (3) Hartley's rule is inexact while Shannon's formula is characteristic of the additive white Gaussian noise channel; (4) Hartley's rule is an imprecise relation that is not an appropriate formula for the capacity of a communication channel. We show that all four statements are somewhat wrong. In fact, a careful calculation shows that “Hartley's rule” coincides with Shannon's formula. We explain this mathematical coincidence by deriving the necessary and sufficient conditions on an additive noise channel such that its capacity is given by Shannon's formula, and we construct a sequence of such channels that makes the link between the uniform (Hartley) and Gaussian (Shannon) channels.
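    For orientation, the two expressions at stake, in their commonly quoted forms (not reproduced in the abstract): Shannon's formula for a channel of bandwidth W, signal power P and noise power N,
    \[
      C \;=\; W \log_2\!\Bigl(1 + \frac{P}{N}\Bigr) \quad \text{bits per second},
    \]
    and Hartley's rule, obtained by counting 1 + A/\Delta distinguishable amplitude levels per sample (signal amplitude at most A, noise amplitude at most \Delta) at the Nyquist rate of 2W samples per second,
    \[
      C' \;=\; 2W \log_2\!\Bigl(1 + \frac{A}{\Delta}\Bigr).
    \]
    The “coincidence” examined in the paper is that, for a suitably defined additive-noise channel, these two expressions in fact agree.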